Neurobiology of Language

MIT Press

All preprints, ranked by how well they match Neurobiology of Language's content profile, based on 28 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.

1
The entire brain, more or less, is at work: 'Language regions' are artefacts of averaging

Aliko, S.; Wang, B.; Small, S. L.; Skipper, J. I.

2023-09-03 neuroscience 10.1101/2023.09.01.555886 medRxiv
Top 0.1%
18.5%

Models of the neurobiology of language suggest that a small number of anatomically fixed brain regions are responsible for language functioning. This derives from centuries of aphasia studies and decades of neuroimaging. The latter rely on thresholded measures of central tendency applied to activity patterns from heterogeneous stimuli. We hypothesised that these methods obscure the whole brain distribution of regions supporting language. Specifically, language regions are connectivity hubs, coordinating varying peripheral activity that averages out following thresholding. We tested this with neuroimaging meta-analyses and movie-fMRI. Results show that words localise to language regions when averaged but are distributed throughout the brain when examining specific linguistic representations. These language regions are partially connectivity hubs that are spatiotemporally dynamic, making connections with ~40% of the brain periphery outside those regions, and only appearing in the aggregate over time. Hub-periphery connections encode linguistic representations, not the language regions alone. Results replicate with audiobook-fMRI. Finally, intracranial neuronal recordings support these findings, showing language regions are hub-like, with linguistic representations decodable throughout the brain. Together, these four studies suggest that language regions are artefacts of averaging heterogeneous language representations. Instead, they are connectivity hubs coordinating whole-brain distributed network representations, suggesting why their damage results in aphasia.

2
Structural Brain Correlates of Speech Disfluency in Early Childhood: A Dimensional Analysis in a Non-Clinical Cohort

Jolly, A.; Yli-Savola, A.; Pulli, E.; Saloranta, E.; Railo, H.; Merisaari, H.; Saukko, E.; Silver, E.; Kumpulainen, V.; Copeland, A.; Karlsson, H.; Karlsson, L.; Junttila, N.; Mainela-Arnold, E.; Tuulari, J.

2025-08-22 neuroscience 10.1101/2025.08.18.670817 medRxiv
Top 0.1%
17.6%

Most neuroimaging studies of speech disfluency have compared individuals who stutter with fluent controls. However, treating speech disfluency as a continuous, dimensional trait offers new insights into the neural basis of fluency during early childhood. This study aimed to investigate whether naturally occurring variation in speech disfluency is associated with grey matter structure in a non-clinical, population-based sample of 5-year-old children. The study included 120 participants (65M, 55F) from the FinnBrain Birth Cohort study. Speech disfluency was evaluated as a continuous measure from audiovisual speech samples, with transcription and analysis conducted using the SALT software. The Ambrose & Yairi (1999) classification system was used to categorize speech disfluencies into stuttering-like (SLD) and other disfluency types. T1-weighted images obtained through magnetic resonance imaging were analyzed using voxel-based morphometry (VBM) with the CAT12 toolbox and complemented by surface-based morphometry with FreeSurfer. Whole-brain statistical analysis was employed to examine the association between grey matter metrics and speech disfluency. We found that VBM-derived proportional grey matter volume in the left middle frontal gyrus, left posterior cerebellum, and right superior frontal gyrus was positively associated with speech disfluency, specifically SLD, in children (p < .001; p = .002; p < .001, FDR corrected). No significant associations were found for cortical thickness or surface area. Additionally, no notable sex differences were observed. Our findings suggest that speech disfluency in early childhood is linked to localized structural differences in regions supporting motor planning and cognitive control, without broader changes in cortical thickness or surface area. Importantly, similar brain regions have been implicated in studies comparing children who stutter to those who do not, suggesting that normal variation in disfluency captures meaningful neurobiological differences even in non-clinical populations. This supports the value of treating speech disfluency as a spectrum and underscores the importance of longitudinal, multimodal research to clarify how these structural features evolve and influence later fluency outcomes.

3
How can graph theory inform the dual-stream model of speech processing? A resting-state fMRI study of post-stroke aphasia

Zhu, H.; Fitzhugh, M. C.; Keator, L. M.; Johnson, L.; Rorden, C.; Bonilha, L.; Fridriksson, J.; Rogalsky, C.

2023-04-19 neuroscience 10.1101/2023.04.17.537216 medRxiv
Top 0.1%
14.8%

The dual-stream model of speech processing has been proposed to represent the cortical networks involved in speech comprehension and production. Although it is arguably the most prominent neuroanatomical model of speech processing, it is not yet known whether the dual-stream model represents actual intrinsic functional brain networks. Furthermore, it is unclear how post-stroke disruptions to the functional connectivity of the dual-stream model's regions relate to the specific types of speech production and comprehension impairments seen in aphasia. To address these questions, in the present study, we examined two independent resting-state fMRI datasets: (1) 28 neurotypical matched controls and (2) 28 chronic left-hemisphere stroke survivors with aphasia collected at another site. Structural MRI, as well as language and cognitive behavioral assessments, were collected. Using standard functional connectivity measures, we successfully identified an intrinsic resting-state network amongst the dual-stream model's regions in the control group. We then used both standard functional connectivity analyses and graph theory approaches to determine how the functional connectivity of the dual-stream network differs in individuals with post-stroke aphasia, and how this connectivity may predict performance on clinical aphasia assessments. Our findings provide strong evidence that the dual-stream model is an intrinsic network as measured via resting-state fMRI, and that functional connectivity of the hub nodes of the dual-stream network, as defined by graph theory methods, but not overall average network connectivity, is weaker in the stroke group than in the control participants. The functional connectivity of the hub nodes also predicted specific types of impairments on clinical assessments. In particular, the relative strength of connectivity of the right-hemisphere homologues of the left dorsal stream hubs to the left dorsal hubs versus the right ventral stream hubs is a particularly strong predictor of post-stroke aphasia severity and symptomology.

4
Verbal working memory and syntactic comprehension segregate into the dorsal and ventral streams

Matchin, W.; Mollasaraei, Z. K.; Bonilha, L.; Rorden, C.; Hickok, G.; den Ouden, D.; Fridriksson, J.

2024-05-05 neuroscience 10.1101/2024.05.05.592577 medRxiv
Top 0.1%
13.8%

Syntactic processing and verbal working memory are both essential components to sentence comprehension. Nonetheless, the separability of these systems in the brain remains unclear. To address this issue, we performed causal-inference analyses based on lesion and connectome network mapping using MRI and behavioral testing in 103 individuals with chronic post-stroke aphasia. We employed a rhyme judgment task with heavy working memory load without articulatory confounds, controlling for the overall ability to match auditory words to pictures and to perform a metalinguistic rhyme judgment, isolating the effect of working memory load. We assessed noncanonical sentence comprehension, isolating syntactic processing by incorporating residual rhyme judgment performance as a covariate for working memory load. Voxel-based lesion analyses and structural connectome-based lesion symptom mapping controlling for total lesion volume were performed, with permutation testing to correct for multiple comparisons (4,000 permutations). We observed that effects of working memory load localized to dorsal stream damage: posterior temporal-parietal lesions and frontal-parietal white matter disconnections. These effects were differentiated from syntactic comprehension deficits, which were primarily associated with ventral stream damage: lesions to temporal lobe and temporal-parietal white matter disconnections, particularly when incorporating the residual measure of working memory load as a covariate. Our results support the conclusion that working memory and syntactic processing are associated with distinct brain networks, largely loading onto dorsal and ventral streams, respectively.

5
Neural representation of phonological wordform in bilateral posterior temporal cortex

Sorensen, D.; Avcu, E.; Lynch, S.; Ahlfors, S.; Gow, D.

2023-07-21 neuroscience 10.1101/2023.07.19.549751 medRxiv
Top 0.1%
13.8%

While the neural bases of the earliest stages of speech categorization have been widely explored using neural decoding methods, there is still a lack of consensus on questions as basic as how wordforms are represented and in what way this word-level representation influences downstream processing in the brain. Isolating and localizing the neural representations of wordform is challenging because spoken words evoke activation of a variety of representations (e.g., segmental, semantic, articulatory) in addition to form-based representations. We addressed these challenges through a novel integrated neural decoding and effective connectivity design using region of interest (ROI)-based, source-reconstructed magnetoencephalography/electroencephalography (MEG/EEG) data collected during a lexical decision task. To localize wordform representations, we trained classifiers on words and nonwords from different phonological neighborhoods and then tested the classifiers' ability to discriminate between untrained target words that overlapped phonologically with the trained items. Training with either word or nonword neighbors supported decoding in many brain regions during an early analysis window (100-400 ms) reflecting primarily incremental phonological processing. Training with word neighbors, but not nonword neighbors, supported decoding in a bilateral set of temporal lobe ROIs in a later time window (400-600 ms) reflecting activation related to word recognition. These ROIs included bilateral posterior temporal regions implicated in wordform representation. Effective connectivity analyses among regions within this subset indicated that word-evoked activity influenced the decoding accuracy more than nonword-evoked activity did. Taken together, these results evidence functional representation of wordforms in bilateral temporal lobes, isolated from phonemic or semantic representations.

6
An fMRI study of composition in noun and verb phrases

Bonnasse-Gahot, L.; Bemis, D.; Perez-Guevara, M.; Dehaene, S.; Pallier, C.

2025-12-17 neuroscience 10.64898/2025.12.17.694853 medRxiv
Top 0.1%
13.7%

How the language areas of the human brain combine multiple words into meaningful phrases and sentences remains ill-understood. Here, to address this question, we determined the response profile of temporal and inferior frontal language areas to the composition of up to four words into phrases. We tested whether brain activity increases with the number of merged words, and whether this profile differs for noun and verb phrases. To this aim, we used fMRI to quantify the brain responses to individual noun and verb phrases of varying length and to tightly matched word lists. Increasing phrase length was associated with an increase in activation in all regions of the temporo-frontal language network. The effect was more pronounced for phrases built around verbs than for phrases built around nouns, suggesting that verbs involve a more complex syntactic tree structure than nouns. Even with word lists, several regions, notably the inferior frontal gyrus (IFG) pars triangularis and opercularis and the posterior superior temporal sulcus, showed clear increases in activity with the length of sequences, although the words could not be merged into phrases. By contrast, other regions (IFG pars orbitalis, anterior temporal lobe, temporo-parietal junction) did not react to scrambled word lists. These different functional response profiles inform theories of how composition is implemented in the human brain.

7
The language network is recruited but not required for non-verbal semantic processing

Ivanova, A. A.; Mineroff, Z.; Zimmerer, V.; Kanwisher, N.; Varley, R.; Fedorenko, E.

2019-07-09 neuroscience 10.1101/696484 medRxiv
Top 0.1%
10.5%

The ability to combine individual meanings into complex representations of the world is often associated with language. Yet people also construct combinatorial event-level representations from non-linguistic input, e.g. from visual scenes. Here, we test whether the language network in the human brain is involved in and necessary for semantic processing of nonverbal events. In Experiment 1, we scanned participants with fMRI while they performed a semantic plausibility judgment task vs. a difficult perceptual control task on sentences and line drawings that describe/depict simple agent-patient interactions. We found that the language network responded robustly during the semantic task but not during the perceptual control task. This effect was observed for both sentences and pictures (although the response to sentences was stronger). Thus, language regions in healthy adults are engaged during a semantic task performed on pictorial depictions of events. But is this engagement necessary? In Experiment 2, we tested two individuals with global aphasia, who have sustained massive damage to perisylvian language areas and display severe language difficulties, against a group of age-matched control participants. Individuals with aphasia were severely impaired on a task of matching sentences and pictures. However, they performed close to controls in assessing the plausibility of pictorial depictions of agent-patient interactions. Overall, our results indicate that the left fronto-temporal language network is recruited but not necessary for semantic processing of nonverbal events.

8
Speaking in the brain: The interaction between words and syntax in producing sentences

Takashima, A.; Konopka, A.; Meyer, A.; Hagoort, P.; Weber, K.

2019-07-09 neuroscience 10.1101/696310 medRxiv
Top 0.1%
10.0%

This neuroimaging study investigated the neural infrastructure of sentence-level language production. We compared brain activation patterns, as measured with BOLD-fMRI, during production of sentences which differed in verb argument structure (intransitives, transitives, ditransitives) and the lexical status of the verb (known verbs or pseudo-verbs). An example of the type of sentence to be produced began each mini-block of six sentences with the same structure. For each trial, participants were first given the (pseudo-)verb followed by three geometric shapes to serve as verb arguments in the sentences. Production of sentences with known verbs yielded greater activation compared to those with pseudo-verbs in the core language network of the left inferior frontal gyrus, the left posterior middle temporal gyrus, and a more posterior middle temporal region extending into the angular gyrus (LpMTG/AG), analogous to effects observed in language comprehension. Increasing the number of verb arguments led to greater activation in an overlapping left pMTG/AG area, particularly for known verbs, as well as in the bilateral precuneus. Thus, producing sentences with more complex structures using existing verbs led to increased activation in the language network, suggesting some reliance on memory retrieval of stored lexical-syntactic information during sentence production. This study thus provides evidence from sentence-level language production in line with functional models of the language network that have so far been based mainly on single-word production, comprehension, and processing in aphasia.

9
Neural processing of natural speech by adults with and without dyslexia: Evidence for atypical cortical decoding of speech information in the delta and theta EEG bands

Keshavarzi, M.; Moore, B. C. J.; Goswami, U.

2026-02-19 neuroscience 10.64898/2026.02.18.706607 medRxiv
Top 0.1%
9.9%

Neural oscillations in the delta (0.5-4 Hz) and theta (4-8 Hz) bands play a key role in tracking the temporal structure of speech. According to Temporal Sampling (TS) theory, dyslexia arises from atypical entrainment of these low-frequency oscillations to speech during infancy and childhood, which is particularly disruptive regarding phonological encoding. However, studies of adults with dyslexia have rarely examined both delta and theta cortical tracking under naturalistic listening conditions, and have not measured delta-band cortical tracking. Using EEG, here we focused on delta- and theta-band cortical tracking of continuous natural speech by adults with and without dyslexia, applying a decoding analysis previously used with dyslexic children. Forty-eight English-speaking adults (24 dyslexic, 24 control) listened to a 16-minute continuous spoken narrative while EEG was recorded. Neural decoding of the speech envelope was quantified using backward multivariate Temporal Response Function (mTRF) models applied at two levels: a between-group analysis evaluating group-level differences in neural representation patterns, and a within-participant analysis assessing individual decoding accuracy. Cerebro-acoustic coherence was computed in parallel to provide a complementary measure of neural-speech synchronisation. Additional analyses examined band power, cross-frequency phase-amplitude coupling (PAC), and cross-frequency phase-phase coupling (PPC). Dyslexic adults exhibited less accurate delta- and theta-band decoding in the between-group analysis and reduced theta-band decoding accuracy in the within-participant analysis, alongside reduced coherence in both bands and increased delta-band power, particularly over the right temporal region. No group differences were found for PAC or PPC.
Highlights:
- Adults with dyslexia showed reduced delta- and theta-band speech decoding
- Cerebro-acoustic coherence was reduced in delta and theta bands in dyslexia group
- Delta-band power was increased in dyslexia, especially over right temporal region
- Cross-frequency coupling did not differ between adults with and without dyslexia

10
The P600, but not N400, is modulated by sustained attention

Contier, F.; Weymar, M.; Wartenburger, I.; Rabovsky, M.

2021-11-20 neuroscience 10.1101/2021.11.18.469143 medRxiv
Top 0.1%
9.9%

The functional significance of the two prominent language-related ERP components N400 and P600 is still under debate. It has recently been suggested that one important dimension along which the two vary is in terms of automaticity versus attentional control, with N400 amplitudes reflecting more automatic and P600 amplitudes reflecting more controlled aspects of sentence comprehension. The availability of executive resources necessary for controlled processes depends on sustained attention, which fluctuates over time. Here, we thus tested whether P600 and N400 amplitudes depend on the level of sustained attention. We re-analyzed EEG and behavioral data from a sentence processing task by Sassenhagen & Bornkessel-Schlesewsky (2015, Cortex), which included sentences with morphosyntactic and semantic violations. Participants read sentences phrase by phrase and indicated whether a sentence contained any type of anomaly as soon as they had the relevant information. To quantify the varying degree of sustained attention, we extracted a moving reaction time coefficient of variation over the entire course of the task. We found that the P600 amplitude was significantly larger during periods of low reaction time variability (high sustained attention) than in periods of high reaction time variability (low sustained attention). In contrast, the amplitude of the N400 was not affected by reaction time variability. These results thus suggest that the P600 component is sensitive to sustained attention while the N400 component is not, which provides independent evidence for accounts suggesting that P600 amplitudes reflect more controlled and N400 amplitudes more automatic aspects of sentence comprehension.

11
Syntactic representations in the human brain: beyond effort-based metrics

Reddy, A. J.; Wehbe, L.

2020-10-09 neuroscience 10.1101/2020.06.16.155499 medRxiv
Top 0.1%
9.9%

While studying semantics in the brain, neuroscientists use two approaches. One is to identify areas that are correlated with semantic processing load. Another is to find areas that are predicted by the semantic representation of the stimulus words. However, in the domain of syntax, most studies have focused only on identifying areas correlated with syntactic processing load. One possible reason for this discrepancy is that representing syntactic structure in an embedding space such that it can be used to model brain activity is a non-trivial computational problem. Another possible reason is that it is unclear if the low signal-to-noise ratio of neuroimaging tools such as functional Magnetic Resonance Imaging (fMRI) can allow us to reveal correlates of complex (and perhaps subtle) syntactic representations. In this study, we propose novel multi-dimensional features that encode information about the syntactic structure of sentences. Using these features and fMRI recordings of participants reading a natural text, we model the brain representation of syntax. First, we find that our syntactic structure-based features explain additional variance in the brain activity of various parts of the language system, even after controlling for complexity metrics that capture processing load. At the same time, we see that regions well-predicted by syntactic features are distributed in the language system and are not distinguishable from those processing semantics.

12
Semantic representations during language comprehension are affected by context

Deniz, F.; Tseng, C.; Wehbe, L.; Gallant, J. L.

2021-12-16 neuroscience 10.1101/2021.12.15.472839 medRxiv
Top 0.1%
9.9%

The meaning of words in natural language depends crucially on context. However, most neuroimaging studies of word meaning use isolated words and isolated sentences with little context. Because the brain may process natural language differently from how it processes simplified stimuli, there is a pressing need to determine whether prior results on word meaning generalize to natural language. fMRI was used to record human brain activity while four subjects (two female) read words in four conditions that vary in context: narratives, isolated sentences, blocks of semantically similar words, and isolated words. We then compared the signal-to-noise ratio (SNR) of evoked brain responses, and we used a voxelwise encoding modeling approach to compare the representation of semantic information across the four conditions. We find four consistent effects of varying context. First, stimuli with more context evoke brain responses with higher SNR across bilateral visual, temporal, parietal, and prefrontal cortices compared to stimuli with little context. Second, increasing context increases the representation of semantic information across bilateral temporal, parietal, and prefrontal cortices at the group level. In individual subjects, only natural language stimuli consistently evoke widespread representation of semantic information. Third, context affects voxel semantic tuning. Finally, models estimated using stimuli with little context do not generalize well to natural language. These results show that context has large effects on the quality of neuroimaging data and on the representation of meaning in the brain. Thus, neuroimaging studies that use stimuli with little context may not generalize well to the natural regime. Significance Statement: Context is an important part of understanding the meaning of natural language, but most neuroimaging studies of meaning use isolated words and isolated sentences with little context. Here we examined whether the results of neuroimaging studies that use out-of-context stimuli generalize to natural language. We find that increasing context improves the quality of neuroimaging data and changes where and how semantic information is represented in the brain. These results suggest that findings from studies using out-of-context stimuli may not generalize to natural language used in daily life.

13
Revisiting atypical language lateralization in dyslexia

Verhelst, H.; Karlsson, E. M.; Gerrits, R.; Vingerhoets, G.

2025-05-07 neuroscience 10.1101/2025.05.06.652381 medRxiv
Top 0.1%
9.1%

Hemispheric lateralization has been central to developmental dyslexia research for over a century, yet its role in the etiology of reading and language deficits remains elusive. While altered asymmetries have long been implicated, evidence is inconsistent, with limited consideration given to individual variability in lateralization patterns. This study investigated hemispheric lateralization in 35 adults with dyslexia and 35 matched controls using functional MRI across three language tasks: word generation, rhyming decision, and lexical decision. Laterality indices (LIs) were calculated to comprehensively assess the strength, direction, and consistency of activation across global and regional task-specific brain areas. Significant group differences were not found in the absolute strength of lateralization for global measures or any regional measures, except in the fusiform gyrus, where people with dyslexia showed lower asymmetry. Directional asymmetry was similar across the two groups, except in the fusiform gyrus during the reading task, where dyslexic individuals showed a higher prevalence of right-hemisphere lateralization compared to controls. Interestingly, we found that dyslexic participants demonstrated greater inconsistency in regional lateralization during reading and rhyming tasks. Among individuals with dyslexia, those with inconsistent lateralization in the reading task had weaker fusiform lateralization, although fusiform LI strength itself did not predict reading outcomes. Our findings suggest that dyslexia is characterized by inconsistent, rather than universally weaker, lateralization patterns. Inconsistencies in task-related and regional lateralization may disrupt the efficiency of language networks, contributing to observed reading deficits. By highlighting the role of regional and task-specific inconsistencies, this study provides new insights into the neural mechanisms underlying dyslexia and underscores the importance of considering individual variability in hemispheric lateralization when investigating language disorders.

14
Brain-Informed Fine-Tuning for Improved Multilingual Understanding in Language Models

Negi, A.; Oota, S. R.; Gupta, M.; Deniz, F.

2025-07-10 neuroscience 10.1101/2025.07.07.662360 medRxiv
Top 0.1%
9.0%

Recent studies have demonstrated that fine-tuning language models with brain data can improve their semantic understanding, although these findings have so far been limited to English. Interestingly, similar to the shared multilingual embedding space of pretrained multilingual language models, human studies provide strong evidence for a shared semantic system in bilingual individuals. Here, we investigate whether fine-tuning language models with bilingual brain data changes model representations in a way that improves them across multiple languages. To test this, we fine-tune monolingual and multilingual language models using brain activity recorded while bilingual participants read stories in English and Chinese. We then evaluate how well these representations generalize to the bilingual participants' first language, their second language, and several other languages that the participants are not fluent in. We assess the fine-tuned language models on brain encoding performance and downstream NLP tasks. Our results show that bilingual brain-informed fine-tuned language models outperform their vanilla (pretrained) counterparts in both brain encoding performance and most downstream NLP tasks across multiple languages. These findings suggest that brain-informed fine-tuning improves multilingual understanding in language models, offering a bridge between cognitive neuroscience and NLP research. We make our code publicly available.

15
Understanding and Improving Word Embeddings through a Neuroscientific Lens

Fereidooni, S.; Mocz, V.; Radev, D.; Chun, M.

2020-09-20 neuroscience 10.1101/2020.09.18.304436 medRxiv
Top 0.1%
8.4%

Despite the success of models making use of word embeddings on many natural language tasks, these models often perform significantly worse than humans on several natural language understanding tasks. This difference in performance motivates us to ask: (1) whether existing word vector representations have any basis in the brain's representational structure for individual words, and (2) whether features from the brain can be used to improve word embedding model performance, defined as their correlation with human semantic judgements. To answer the first question, we compare the representational spaces of existing word embedding models with that of brain imaging data through representational similarity analysis. We answer the second question by using regression-based learning to constrain word vectors to the features of the brain imaging data, thereby determining if these modified word vectors exhibit increased performance over their unmodified counterparts. To collect semantic judgements as a measure of performance, we employed a novel multi-arrangement method. Our results show that there is variance in the representational space of the brain imaging data that remains uncaptured by word embedding models, and that brain imaging data can be used to increase their coherence with human performance.

16
Does bilingualism buffer genetic predispositions to reading difficulties through alterations of structural interhemispheric connectivity? An ABCD Study.

Lallier, M.; Rius-Manau, C.; 23andMe Research Team; Carrion-Castillo, A.

2026-04-07 neuroscience 10.64898/2026.04.07.716864 medRxiv
Top 0.1%
8.4%

Here, we test the hypothesis that early sustained exposure to complex bilingual environments can positively affect reading development by altering structural interhemispheric connectivity via the corpus callosum (CC). Interhemispheric connectivity has been shown to be inefficient in dyslexia, but also to support compensatory pathways when genetic risk for reading difficulties is present, by enabling the preserved right hemisphere to support a dysfunctional left hemisphere. Mediation models were conducted on children aged 9-10 years (with a 2-year follow-up assessment) from the Adolescent Brain Cognitive Development database (N > 10,000). Polygenic scores (PGS) for dyslexia and cognitive performance and continuous bilingualism indices were used as predictors, with reading aloud as the outcome. Bilingualism showed a positive effect on reading partially mediated by the anterior CC, independently of overall brain size. In contrast, genetic predispositions to reading difficulties influenced reading primarily through overall brain size rather than CC connectivity specifically. These two pathways were independent, suggesting that bilingual experience and genetic risk operate through distinct neuroanatomical mechanisms. These findings suggest that recurrent early exposure to complex bilingual environments may shape the brain's structural connectivity toward a more balanced and integrated bilateral frontal organisation. The results highlight potential brain compensatory pathways induced by environmental experiences that may support more efficient reading development and mitigate risks for developmental dyslexia.

17
Neural substrates and behavioral relevance of speech envelope tracking: evidence from post-stroke aphasia

De Clercq, P.; Kries, J.; Vanthornhout, J.; Gerrits, R.; Francart, T.; Vandermosten, M.

2024-03-28 neuroscience 10.1101/2024.03.26.586859 medRxiv
Top 0.1%
8.3%
Neural tracking of the low-frequency temporal envelope of speech has emerged as a prominent tool to investigate the neural mechanisms of natural speech processing in the brain. However, there is ongoing debate regarding the functional role of neural envelope tracking. In this context, our study aims to offer a novel perspective by investigating the critical brain areas and behavioral skills required for neural envelope tracking in aphasia, a language disorder characterized by impaired neural envelope tracking. We analyzed an EEG dataset of 39 individuals with aphasia following a left-hemispheric stroke who listened to natural speech. Our analysis involved lesion mapping, where left lesioned brain voxels served as binary features to predict neural envelope tracking measures. We also examined the behavioral correlates of receptive language, naming, and auditory processing (via a rise time discrimination task) skills. The lesion mapping analysis revealed that lesions in language areas, such as the middle temporal gyrus, supramarginal gyrus and angular gyrus, were associated with poorer neural envelope tracking. Additionally, neural tracking was related to auditory processing skills and language (receptive and naming) skills. However, the effects on language skills were less robust, possibly due to ceiling effects in the language scores. Our findings highlight the importance of central brain areas implicated in language understanding, extending beyond the primary auditory cortex, and emphasize the role of intact auditory processing and language abilities in effectively processing the temporal envelope of speech. Collectively, these findings underscore the significance of neural envelope tracking beyond mere audibility and acoustic processes.
Significance statement: While some studies have proposed that neural envelope tracking primarily relates to audibility and acoustic speech processes, others have suggested its involvement in actual speech and language comprehension. By investigating the critical brain areas and behavioral skills essential in aphasia, we argue for a broader significance of neural envelope tracking in language processing. Furthermore, our findings highlight a specificity among individuals with aphasia, indicating its correlation with lesions in temporal brain regions associated with receptive language functions. This addresses the significant heterogeneity in lesion characteristics present among individuals with aphasia and suggests the potential of neural tracking as an EEG-based tool for specifically assessing receptive language abilities in this population.
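Lesion mapping with binary voxel-wise features, as described in this abstract, is often run in its simplest mass-univariate form: for each voxel, compare the behavioural score of patients with versus without a lesion there. The sketch below assumes that variant (a two-sample t statistic per voxel on simulated data); the paper's actual predictive approach may differ.

```python
import numpy as np

def lesion_symptom_map(lesions, scores):
    """Mass-univariate lesion-symptom mapping: per voxel, a two-sample
    t statistic comparing scores of patients with vs. without a lesion.
    lesions: (patients, voxels) binary array; scores: (patients,)."""
    t = np.full(lesions.shape[1], np.nan)
    for v in range(lesions.shape[1]):
        les = scores[lesions[:, v] == 1]
        spa = scores[lesions[:, v] == 0]
        if len(les) < 2 or len(spa) < 2:
            continue  # too few patients in one group to test this voxel
        se = np.sqrt(les.var(ddof=1) / len(les) + spa.var(ddof=1) / len(spa))
        t[v] = (les.mean() - spa.mean()) / se
    return t

# Toy cohort: 39 'patients', 200 'voxels'; a lesion at voxel 0 lowers tracking.
rng = np.random.default_rng(2)
lesions = (rng.random((39, 200)) < 0.3).astype(int)
tracking = rng.normal(size=39) - 1.5 * lesions[:, 0]
tmap = lesion_symptom_map(lesions, tracking)
```

Strongly negative t values flag voxels where damage co-occurs with poorer tracking; real analyses additionally correct for multiple comparisons and lesion-volume confounds.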

18
MEG correlates of empty subjects in Japanese control and raising constructions

Yamaguchi, K.; Yamada, E.; Shigeto, H.; Ohta, S.

2026-01-09 neuroscience 10.64898/2026.01.08.698326 medRxiv
Top 0.1%
8.3%
Empty categories are unpronounced elements with syntactic properties that play a central role in theories of sentence structure. Although there are several types within these categories, the neural basis for distinguishing among them remains unclear. Using magnetoencephalography, we investigate whether the brain distinguishes between two Japanese sentence structures: control and raising. Although these constructions appear similar on the surface, they are argued to involve different types of empty categories. In theoretical analyses, the control-type empty category, which is called PRO, is often treated as an anaphoric element, similar to reflexives such as himself and herself, whereas the raising-type empty category is a noun phrase trace. Twenty-six native Japanese speakers participated in a reading task under three experimental conditions: control, raising, and baseline. Source estimates were computed, and condition differences were tested using spatiotemporal cluster-based permutation t-tests. We observed late left-hemispheric differences at approximately 700-800 ms after the critical verb. The control condition elicited larger responses than the raising condition, with activity centered in the temporal cortex spanning the middle temporal gyrus and the superior temporal sulcus and gyrus and extending into the anterior insula and the supramarginal gyrus. In addition, the control condition elicited larger late responses than the baseline condition in a broader left fronto-temporal distribution, including the inferior frontal cortex, anterior temporal cortex, and insula. These results provide source-level evidence that brain activity in the left language network differs between the control and raising conditions in Japanese during online sentence comprehension. Furthermore, they suggest that we can distinguish empty category types in the brain.
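The cluster-based permutation t-test used here can be illustrated in one dimension (time only): threshold per-timepoint t values, sum |t| over contiguous supra-threshold runs, and compare the largest observed cluster mass against a sign-flip null distribution. This is a didactic sketch on simulated data, not the authors' source-space analysis.

```python
import numpy as np

def paired_t(diff):
    """One-sample t per time point on condition differences (subjects x times)."""
    n = diff.shape[0]
    return diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n))

def max_cluster_mass(t, thresh):
    """Largest sum of |t| over a contiguous run of supra-threshold time points."""
    best = run = 0.0
    for v in np.abs(t):
        run = run + v if v > thresh else 0.0
        best = max(best, run)
    return best

def cluster_perm_test(cond_a, cond_b, thresh=2.0, n_perm=500, seed=0):
    """Paired cluster-based permutation test via random sign flips of
    each subject's condition difference."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b
    observed = max_cluster_mass(paired_t(diff), thresh)
    null = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diff.shape[0])[:, None]
        null[p] = max_cluster_mass(paired_t(diff * signs), thresh)
    return observed, float((null >= observed).mean())

# Toy data: 26 'subjects', 100 time points, with an effect late in the epoch
# (analogous to the 700-800 ms window reported in the abstract).
rng = np.random.default_rng(3)
a = rng.normal(size=(26, 100))
a[:, 70:80] += 1.0
b = rng.normal(size=(26, 100))
obs, pval = cluster_perm_test(a, b)
```

Because only the single maximum cluster statistic is compared per permutation, the test controls the family-wise error rate across all time points without a per-point correction.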

19
Interindividual differences in predicting words versus sentence meaning: Explaining N400 amplitudes using large-scale neural network models

Rabovsky, M.; Lopopolo, A.; Schad, D. J.

2025-06-07 neuroscience 10.1101/2025.06.03.657727 medRxiv
Top 0.1%
8.3%
Prediction error, both at the level of sentence meaning and at the level of the next presented word, has been shown to successfully account for N400 amplitudes. Here we address the question of whether people differ in the representational level at which they implicitly predict upcoming language. To this end, we compute a measure of prediction error at the level of sentence meaning (magnitude of change in hidden layer activation, termed semantic update, in a neural network model of sentence comprehension, the Sentence Gestalt model) and a measure of prediction error at the level of the next presented word (surprisal from a next word prediction language model). When using both measures to predict N400 amplitudes during the reading of naturalistic texts, results showed that both measures significantly accounted for N400 amplitudes even when the other measure was controlled for. Most important for current purposes, both effects were significantly negatively correlated such that people with a reversed or weak surprisal effect showed the strongest influence of semantic update on N400 amplitudes, and random-effects model comparison showed that individuals differ in whether their N400 amplitudes are driven by semantic update only, by surprisal only, or by both, and that the most common model in the population was either semantic update or the combined model but clearly not the pure surprisal model. The current approach of combining large-scale models implementing different theoretical accounts with advanced model comparison techniques enables fine-grained investigations into the computational processes underlying N400 amplitudes, including interindividual differences.
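The core statistical move here, regressing N400 amplitudes on surprisal and semantic update simultaneously so that each predictor is controlled for the other, can be sketched as an ordinary least-squares fit. The paper itself uses random-effects model comparison across individuals; the toy simulation below shows only the fixed-effects idea, with made-up effect sizes.

```python
import numpy as np

def fit_n400(surprisal, sem_update, n400):
    """OLS of N400 amplitude on z-scored surprisal and semantic update,
    so each predictor's coefficient controls for the other."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([np.ones(len(n400)), z(surprisal), z(sem_update)])
    beta, *_ = np.linalg.lstsq(X, n400, rcond=None)
    return {"surprisal": float(beta[1]), "sem_update": float(beta[2])}

# Simulated words: correlated predictors, both pushing the N400 more negative.
rng = np.random.default_rng(4)
n = 2000
surprisal = rng.normal(size=n)
sem_update = 0.4 * surprisal + rng.normal(size=n)
n400 = -0.3 * surprisal - 0.5 * sem_update + rng.normal(size=n)
coefs = fit_n400(surprisal, sem_update, n400)
```

Fitting this model per participant, and then asking whose coefficients favour surprisal, semantic update, or both, is the shape of the interindividual analysis the abstract describes.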

20
Functional characterization of the language network of polyglots and hyperpolyglots with precision fMRI

Malik-Moraleda, S.; Jouravlev, O.; Mineroff, Z.; Cucu, T.; Taliaferro, M.; Mahowald, K.; Blank, I. A.; Fedorenko, E.

2023-01-19 neuroscience 10.1101/2023.01.19.524657 medRxiv
Top 0.1%
8.2%
How do polyglots--individuals who speak five or more languages--process their languages, and what can this population tell us about the language system? Using fMRI, we identified the language network in each of 34 polyglots (including 16 hyperpolyglots with knowledge of 10+ languages) and examined its response to the native language, non-native languages of varying proficiency, and unfamiliar languages. All language conditions engaged all areas of the language network relative to a control condition. Languages that participants rated as higher-proficiency elicited stronger responses, except for the native language, which elicited a similar or lower response than a non-native language of similar proficiency. Furthermore, unfamiliar languages that were typologically related to the participants' high-to-moderate-proficiency languages elicited a stronger response than unfamiliar unrelated languages. The results suggest that the language network's response magnitude scales with the degree of engagement of linguistic computations (e.g., related to lexical access and syntactic-structure building). We also replicated a prior finding of weaker responses to native language in polyglots than non-polyglot bilinguals. These results contribute to our understanding of how multiple languages co-exist within a single brain and provide new evidence that the language network responds more strongly to stimuli that more fully engage linguistic computations.